6 research outputs found

    Temporal HeartNet: Towards Human-Level Automatic Analysis of Fetal Cardiac Screening Video

    We present an automatic method to describe clinically useful information about scanning, and to guide image interpretation, in ultrasound (US) videos of the fetal heart. Our method jointly predicts the visibility, viewing plane, location and orientation of the fetal heart at the frame level. The contributions of the paper are three-fold: (i) a convolutional neural network architecture is developed for multi-task prediction, computed by sliding a 3x3 window spatially through the convolutional maps; (ii) an anchor mechanism and an Intersection over Union (IoU) loss are applied to improve localization accuracy; (iii) a recurrent architecture is designed to recursively compute regional convolutional features temporally over sequential frames, allowing each prediction to be conditioned on the whole video. This results in a spatio-temporal model that precisely describes detailed heart parameters in challenging US videos. We report results on a real-world clinical dataset, where our method achieves performance on par with expert annotations.
    Comment: To appear in MICCAI, 201
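    The IoU loss named in contribution (ii) can be sketched for axis-aligned boxes. The abstract does not give the exact form, so the `-ln(IoU)` loss below is an assumption following the common UnitBox-style formulation; the box coordinate convention is likewise illustrative:

```python
import math

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2), x1 < x2, y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when boxes do not overlap.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target, eps=1e-6):
    # -ln(IoU): tends to 0 as the boxes align, grows as the overlap shrinks.
    return -math.log(iou(pred, target) + eps)
```

    Optimising this loss directly couples the four box coordinates, which is why IoU-based losses tend to localise better than independent per-coordinate regression.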

    Real-time standard scan plane detection and localisation in fetal ultrasound using fully convolutional neural networks

    Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise; however, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, respectively, which is superior to the current state of the art. Furthermore, we show that the method can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views.
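    The precision and recall figures above are the standard per-class detection metrics. A minimal sketch of the computation follows; the label names are illustrative placeholders, not the paper's twelve plane classes:

```python
from collections import Counter

def per_class_precision_recall(y_true, y_pred, labels):
    """Precision = TP/(TP+FP) and recall = TP/(TP+FN), computed per class."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct detection of class t
        else:
            fp[p] += 1          # p was predicted but the true class was t
            fn[t] += 1          # t was missed
    out = {}
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        out[c] = (prec, rec)
    return out
```

    In a plane-detection setting, high recall with lower precision (as reported above) means most true standard planes are found, at the cost of some spurious detections.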

    Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound

    We present the first system that provides real-time probe movement guidance for acquiring standard planes in routine freehand obstetric ultrasound scanning. Such a system can contribute to the worldwide deployment of obstetric ultrasound scanning by lowering the required level of operator expertise. The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) attached to the probe, and predicts a guidance signal. The network, termed US-GuideNet, predicts either the movement towards the standard plane position (goal prediction) or the next movement that an expert sonographer would perform (action prediction). While existing models for other ultrasound applications are trained with simulations or phantoms, we train our model with real-world ultrasound video and probe motion data from 464 routine clinical scans by 17 accredited sonographers. Evaluations for 3 standard plane types show that the model provides a useful guidance signal with an accuracy of 88.8% for goal prediction and 90.9% for action prediction.
    Comment: Accepted at the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020)

    Object Classification in an Ultrasound Video Using LP-SIFT Features


    Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis.

    Obstetric ultrasound is a fundamental component of modern prenatal care, with many applications including accurate dating of a pregnancy, identifying pregnancy-related complications, and diagnosis of fetal abnormalities. However, despite its many benefits, two factors currently prevent wide-scale uptake of this technology for point-of-care clinical decision-making in low- and middle-income country (LMIC) settings. First, there is a steep learning curve for scan proficiency, and second, there has been a lack of easy-to-use, affordable, and portable ultrasound devices. We introduce a framework toward addressing these barriers, enabled by recent advances in machine learning applied to medical imaging. The framework is designed to be realizable as a point-of-care ultrasound (POCUS) solution comprising an affordable wireless ultrasound probe, a smartphone or tablet, and automated machine-learning-based image processing. Specifically, we propose a machine-learning-based algorithm pipeline designed to automatically estimate the gestational age of a fetus from a short fetal ultrasound scan. We present a proof-of-concept evaluation of the accuracy of the key image analysis algorithms for automatic head transcerebellar plane detection, automatic transcerebellar diameter measurement, and estimation of gestational age on conventional ultrasound data simulating the POCUS task, and discuss next steps toward translation via a first application on clinical ultrasound video from a low-cost ultrasound probe.
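    The three-stage pipeline described above (plane detection, transcerebellar diameter measurement, gestational age estimation) can be sketched as plain function composition. All stage functions here are hypothetical placeholders passed in as arguments; the paper's actual models and its TCD-to-GA regression are not reproduced:

```python
def estimate_gestational_age(frames, detect_plane, measure_tcd_mm, tcd_to_ga_weeks):
    """Chain the three pipeline stages over the frames of a short scan sweep.

    detect_plane:    frame -> score (higher = more likely the transcerebellar plane)
    measure_tcd_mm:  frame -> transcerebellar diameter in millimetres
    tcd_to_ga_weeks: diameter -> gestational age in weeks
    """
    # 1. Select the frame most confidently classified as the transcerebellar plane.
    best_frame = max(frames, key=detect_plane)
    # 2. Measure the transcerebellar diameter on that frame.
    tcd = measure_tcd_mm(best_frame)
    # 3. Map the biometric measurement to a gestational age estimate.
    return tcd_to_ga_weeks(tcd)
```

    Keeping the stages as separate callables mirrors the pipeline's design: each component (detector, biometry, regression) can be evaluated and replaced independently, which matters for translating from conventional ultrasound data to a low-cost probe.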

    Overview of the 2014 Workshop on Medical Computer Vision—Algorithms for Big Data (MCV 2014)

    The 2014 workshop on medical computer vision (MCV): algorithms for big data took place in Cambridge, MA, USA in connection with MICCAI (Medical Image Computing and Computer Assisted Intervention). It was the fourth MICCAI MCV workshop, after those held in 2010, 2012 and 2013, with another edition held at CVPR 2012. The workshop aims at exploring the use of modern computer vision technology in tasks such as automatic segmentation and registration, localisation of anatomical features, and extraction of meaningful visual features. It emphasises questions of harvesting, organising and learning from large-scale medical imaging data sets, and general-purpose automatic understanding of medical images. The workshop is especially interested in modern, scalable and efficient algorithms which generalise well to previously unseen images. The strong participation of more than 80 attendees shows the importance of, and interest in, medical computer vision. This overview article describes the papers presented in the workshop as either oral presentations or short presentations and posters. It also describes the invited talks and the results of the VISCERAL session on the use of big data in medical imaging.